2 research outputs found
Better Sparsifiers for Directed Eulerian Graphs
Spectral sparsification for directed Eulerian graphs is a key component in
the design of fast algorithms for solving directed Laplacian linear systems.
Directed Laplacian linear system solvers are crucial algorithmic primitives to
fast computation of fundamental problems on random walks, such as computing
stationary distribution, hitting and commute time, and personalized PageRank
vectors. While spectral sparsification is well understood for undirected graphs
and it is known that for every graph -sparsifiers with
edges exist [Batson-Spielman-Srivastava, STOC '09]
(which is optimal), the best known constructions of Eulerian sparsifiers
require edges and are based on short-cycle
decompositions [Chu et al., FOCS '18].
In this paper, we give improved constructions of Eulerian sparsifiers,
specifically:
1. We show that for every directed Eulerian graph there exist an
Eulerian sparsifier with edges. This result is based on combining
short-cycle decompositions [Chu-Gao-Peng-Sachdeva-Sawlani-Wang, FOCS '18,
SICOMP] and [Parter-Yogev, ICALP '19], with recent progress on the matrix
Spencer conjecture [Bansal-Meka-Jiang, STOC '23].
2. We give an improved analysis of the constructions based on short-cycle
decompositions, giving an -time algorithm for any constant
for constructing Eulerian sparsifiers with
edges
Training Private Models That Know What They Don't Know
Training reliable deep learning models which avoid making overconfident but
incorrect predictions is a longstanding challenge. This challenge is further
exacerbated when learning has to be differentially private: protection provided
to sensitive data comes at the price of injecting additional randomness into
the learning process. In this work, we conduct a thorough empirical
investigation of selective classifiers -- that can abstain when they are unsure
-- under a differential privacy constraint. We find that several popular
selective prediction approaches are ineffective in a differentially private
setting as they increase the risk of privacy leakage. At the same time, we
identify that a recent approach that only uses checkpoints produced by an
off-the-shelf private learning algorithm stands out as particularly suitable
under DP. Further, we show that differential privacy does not just harm utility
but also degrades selective classification performance. To analyze this effect
across privacy levels, we propose a novel evaluation mechanism which isolate
selective prediction performance across model utility levels. Our experimental
results show that recovering the performance level attainable by non-private
models is possible but comes at a considerable coverage cost as the privacy
budget decreases